Automating Program Speedup by Deciding What to Cache
Authors
Abstract
A common program optimization strategy is to eliminate recomputation by caching and reusing results. We analyze the problems involved in automating this strategy: deciding which computations are safe to cache, transforming the rest of the program to make them safe, choosing the most cost-effective ones to cache, and maintaining the optimized code. The analysis extends previous work on caching by considering side effects, shared data structures, program edits, and the acceptability of behavior changes caused by caching. The paper explores various techniques for solving these problems and attempts to make explicit the assumptions on which they depend. An experimental prototype incorporates many of these techniques.
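For concreteness, here is a minimal sketch of the basic caching idea the abstract describes (not the paper's own system): wrap a side-effect-free computation so that its results are stored and reused, keyed by its arguments. The decorator and function names below are invented for illustration.

# Illustrative sketch only: caching the results of a side-effect-free
# function so repeated calls with the same arguments reuse the stored
# value instead of recomputing it. Names here are hypothetical.

def cache_results(fn):
    """Return a wrapper that stores fn's results keyed by its arguments.

    Safe only if fn has no side effects and its arguments are immutable
    (hashable) values -- the kind of conditions the paper's analysis is
    concerned with checking or establishing by transformation.
    """
    stored = {}

    def wrapper(*args):
        if args not in stored:
            stored[args] = fn(*args)   # compute once
        return stored[args]            # reuse on later identical calls
    return wrapper

@cache_results
def fib(n):
    # A pure computation: the same inputs always yield the same result.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # later identical calls hit the cache instead of recomputing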
Similar resources
A Systematic Measurement of the Influence of Non-Uniform Cache Sharing on the Performance of Modern Multithreaded Programs
Most modern Chip Multiprocessors (CMP) feature shared cache on chip, whose influence on the performance of multithreaded programs, unfortunately, remains unclear due to the limited coverage of the deciding factors in prior studies. In this work, we conduct a systematic measurement of the influence using a recently released CMP benchmark suite, PARSEC, with a spectrum of factors considered. The ...
A Systematic Study of Cache Side Channels Across AES Implementations
While the AES algorithm is regarded as secure, many implementations of AES are prone to cache side-channel attacks. The lookup tables traditionally used in AES implementations for storing precomputed results provide speedup for encryption and decryption. How such lookup tables are used is known to affect the vulnerability to side channels, but the concrete effects in actual AES implementations ...
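As a rough illustration of the mechanism this snippet refers to (not code from any of the AES implementations studied): a lookup into a precomputed table at a secret-dependent index touches a cache line that varies with the key, and that variation is what cache side-channel attacks can observe. The table contents, sizes, and names below are illustrative assumptions.

# Minimal illustration (not real AES): a precomputed lookup table indexed
# by plaintext XOR key. Which cache line the lookup touches depends on the
# secret key byte -- the leak that cache side-channel attacks exploit.

CACHE_LINE_BYTES = 64          # assumed line size; one line holds 64 one-byte entries
SBOX = list(range(256))        # stand-in for a precomputed 256-entry byte table

def first_round_lookup(plaintext_byte: int, key_byte: int) -> int:
    index = plaintext_byte ^ key_byte       # secret-dependent table index
    line = index // CACHE_LINE_BYTES        # cache line touched by this lookup
    print(f"lookup touches cache line {line}")  # observable via timing, in principle
    return SBOX[index]

first_round_lookup(0x3A, key_byte=0x7F)     # different key bytes touch different lines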
Optimizing DDA Code on a POWER5 Processor
In this paper we take an existing scientific computation code, DDA, and optimize it to run on an IBM Power5 processor. The DDA code, originally developed by a Ph.D. candidate in physics, suffers from excessive execution time caused by a high number of cache accesses and a low rate of instructions per cycle. Our goal is to improve the code’s performance by making a series of optimizations in a s...
Post-Pass Compaction Techniques
Abstraction is vital in order to achieve good compaction. With respect to side effects on performance, we made the following observations. First, the compacted programs become 2%-30% faster, with the speedup averaging approximately 11%. The main reason for this speedup is that, with code abstraction limited to infrequently executed code only, the whole-program optimizations result in far less instruct...
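A toy sketch of the code-abstraction idea mentioned in this snippet (not the compactor's actual algorithm): a repeated instruction sequence is factored out into one shared procedure, and restricting the transformation to infrequently executed code keeps the added call overhead off hot paths. The pseudo-instructions and helper names are made up for illustration.

# Toy sketch of code abstraction (procedural outlining): a repeated
# instruction sequence in cold blocks is replaced by calls to one shared
# copy; frequently executed blocks are left untouched.

def abstract_repeated(blocks, pattern, name):
    """Replace occurrences of `pattern` in cold blocks with a call to `name`."""
    shared = {name: list(pattern)}          # the single outlined copy
    for label, (instrs, is_hot) in blocks.items():
        if is_hot:
            continue                        # leave hot code alone to avoid call overhead
        i = 0
        while i <= len(instrs) - len(pattern):
            if instrs[i:i + len(pattern)] == list(pattern):
                instrs[i:i + len(pattern)] = [f"call {name}"]
            i += 1
    return shared

blocks = {
    "error_path_1": (["push r1", "load r1, msg", "call log", "pop r1"], False),
    "error_path_2": (["push r1", "load r1, msg", "call log", "pop r1"], False),
    "main_loop":    (["push r1", "load r1, msg", "call log", "pop r1"], True),
}
shared = abstract_repeated(blocks, ["push r1", "load r1, msg", "call log", "pop r1"], "log_helper")
print(blocks)   # cold blocks now call log_helper; the hot loop is unchanged
print(shared)   # the outlined shared procedure body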
Predicting Instruction Cache Behavior
It has been claimed that the execution time of a program can often be predicted more accurately on an uncached system than on a system with cache memory [5, 20]. Thus, caches are often disabled for critical real-time tasks to ensure the predictability required for scheduling analysis. This work shows that instruction caching can be exploited to gain execution speed without sacrificing predictabil...
Journal:
Volume, issue:
Pages: -
Publication date: 1985